
    Objective prior for the number of degrees of freedom of a t distribution

    In this paper, we construct an objective prior for the degrees of freedom of a t distribution, when the parameter is taken to be discrete. This parameter is typically difficult to estimate and poses a problem for objective Bayesian inference, since improper priors lead to improper posteriors, whilst proper priors may dominate the data likelihood. We find an objective criterion, based on loss functions, instead of trying to define objective probabilities directly. Truncating the prior on the degrees of freedom is necessary, as the t distribution, beyond a certain number of degrees of freedom, becomes practically indistinguishable from the normal distribution. The proposed prior is tested in simulation scenarios, including linear regression with t-distributed errors, and on real data: the daily returns of the closing Dow Jones index over a period of 98 days.
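
    A minimal sketch of how such a discrete, truncated, loss-based prior on the degrees of freedom could be tabulated, assuming purely for illustration that the loss attached to each value of nu is the Kullback-Leibler divergence to its nearest neighbouring value and that the support is truncated at nu = 30; the paper's exact loss criterion and truncation point may differ.

        import numpy as np
        from scipy import integrate
        from scipy.stats import t as student_t

        def kl_t(nu_p, nu_q):
            """KL divergence KL(t_{nu_p} || t_{nu_q}) by numerical quadrature."""
            def integrand(x):
                return student_t.pdf(x, df=nu_p) * (
                    student_t.logpdf(x, df=nu_p) - student_t.logpdf(x, df=nu_q)
                )
            value, _ = integrate.quad(integrand, -np.inf, np.inf, limit=200)
            return value

        nu_max = 30                       # truncation point (assumed for illustration)
        nus = np.arange(1, nu_max + 1)

        # Loss for each nu: KL divergence to the closest neighbouring value of nu
        # (an assumption made here, not a statement of the paper's criterion).
        losses = []
        for nu in nus:
            neighbours = [n for n in (nu - 1, nu + 1) if 1 <= n <= nu_max]
            losses.append(min(kl_t(nu, n) for n in neighbours))

        prior = np.expm1(losses)          # exp(loss) - 1, a common loss-based weighting
        prior /= prior.sum()              # discrete prior over nu = 1, ..., 30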

    Sampling the Dirichlet Mixture Model with Slices

    We provide a new approach to sampling from the well-known mixture of Dirichlet process model. Recent attention has focused on retaining the random distribution function in the model, but sampling algorithms have then suffered from the countably infinite representation these distributions have. The key to the algorithm detailed in this paper, which also keeps the random distribution functions, is the introduction of a latent variable which allows a finite and known number of objects to be sampled within each iteration of a Gibbs sampler.
    Keywords: Bayesian nonparametrics, density estimation, Dirichlet process, Gibbs sampler, slice sampling.
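
    As a rough illustration of this idea, the sketch below shows how the latent slice variables u_i bound the number of mixture components that must be instantiated within a single iteration, assuming a univariate normal kernel with a common scale sigma (an assumption, not taken from the abstract). The hypothetical helpers draw_slices, extend_sticks and allocate only illustrate the finite-truncation mechanism, not the paper's full set of conditional updates.

        import numpy as np
        from scipy.stats import norm

        def draw_slices(weights, z, rng):
            """Latent slice variables: u_i ~ Uniform(0, w_{z_i})."""
            return rng.uniform(0.0, weights[z])

        def extend_sticks(weights, remaining, alpha, u_min, rng):
            """Extend the stick-breaking weights until the leftover mass falls below
            the smallest slice variable, so every component an observation could
            'see' (w_j > u_i) is explicitly represented -- a finite, known number.
            (In a full sampler, each new component also receives an atom drawn
            from the base measure.)"""
            weights = list(weights)
            while remaining > u_min:
                v = rng.beta(1.0, alpha)
                weights.append(v * remaining)
                remaining *= 1.0 - v
            return np.array(weights), remaining

        def allocate(y, u, weights, theta, sigma, rng):
            """Resample labels: observation i may only join components with w_j > u_i."""
            z = np.empty(len(y), dtype=int)
            for i, (yi, ui) in enumerate(zip(y, u)):
                active = np.flatnonzero(weights > ui)   # finite set, by construction
                probs = norm.pdf(yi, loc=theta[active], scale=sigma)
                z[i] = rng.choice(active, p=probs / probs.sum())
            return z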

    Asymptotically minimax empirical Bayes estimation of a sparse normal mean vector

    For the important classical problem of inference on a sparse high-dimensional normal mean vector, we propose a novel empirical Bayes model that admits a posterior distribution with desirable properties under mild conditions. In particular, our empirical Bayes posterior distribution concentrates on balls, centered at the true mean vector, with squared radius proportional to the minimax rate, and its posterior mean is an asymptotically minimax estimator. We also show that, asymptotically, the support of our empirical Bayes posterior has roughly the same effective dimension as the true sparse mean vector. Simulation from our empirical Bayes posterior is straightforward, and our numerical results demonstrate the quality of our method compared to others having similar large-sample properties.
    Comment: 18 pages, 3 figures, 3 tables.
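
    The paper's own empirical Bayes construction is not reproduced here. For orientation only, the sketch below applies a standard two-groups (spike-and-slab) empirical Bayes posterior mean to the same sparse normal-mean setting, with the mixing weight and slab variance estimated by marginal maximum likelihood; this is a generic baseline, not the method of the paper.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        def neg_marginal_loglik(params, y):
            """Negative marginal log-likelihood of the two-groups model
            y_i ~ (1 - w) N(0, 1) + w N(0, 1 + tau2)."""
            w = 1.0 / (1.0 + np.exp(-params[0]))       # mixing weight, on the logit scale
            tau2 = np.exp(params[1])                   # slab variance, on the log scale
            dens = (1 - w) * norm.pdf(y, 0.0, 1.0) + w * norm.pdf(y, 0.0, np.sqrt(1.0 + tau2))
            return -np.sum(np.log(dens))

        def eb_posterior_mean(y):
            """Empirical Bayes posterior mean under theta_i ~ (1 - w) delta_0 + w N(0, tau2),
            with (w, tau2) estimated from the data by marginal maximum likelihood."""
            res = minimize(neg_marginal_loglik, x0=np.array([-2.0, 0.0]), args=(y,))
            w = 1.0 / (1.0 + np.exp(-res.x[0]))
            tau2 = np.exp(res.x[1])
            f1 = w * norm.pdf(y, 0.0, np.sqrt(1.0 + tau2))
            f0 = (1 - w) * norm.pdf(y, 0.0, 1.0)
            p_nonzero = f1 / (f0 + f1)                 # posterior probability that theta_i != 0
            return p_nonzero * (tau2 / (1.0 + tau2)) * y

        # Hypothetical example: n = 1000 observations, 20 nonzero means.
        rng = np.random.default_rng(0)
        theta = np.zeros(1000)
        theta[:20] = 5.0
        y = theta + rng.standard_normal(1000)
        theta_hat = eb_posterior_mean(y)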